A Journey Through Dissipation, Life, and Mind
From the Laws of Thermodynamics to the Emergence of Consciousness
Joseph P. McFadden
Engineering Fellow, Zebra Technologies | Adjunct Professor of Mechanical Engineering, Fairfield University
The Holistic Analyst | McFaddenCAE.com | mcfadden@snet.net
April 2026
Building Intuition Before Equations
All thoughts and ideas are my own, formatted and expanded with Claude AI — not to be told what to write, but to debate and build upon the work.
Introduction
Hello, I am Joseph McFadden. I'm someone who refuses to accept surface-level explanations for the deepest questions. Why does energy flow? How did life emerge from simple chemistry? What is consciousness, really? These aren't just academic puzzles to me — they're invitations to understand our place in the universe.
Over the years, I've built a personal library of more than 1,200 books, ranging from mathematics to psychology, from thermodynamics to neuroscience, from the arrow of time to the nature of entropy. I learned this year that there are fancy words for what I've been doing all my life: I'm a polymath autodidact — someone who learns across disciplines, driven by curiosity rather than curriculum. I don't just surf the web for answers; I consume books, I question deeply, I follow threads of understanding wherever they lead.
More recently, AI has become an extension of that library, a collaborative partner in exploration. What you're about to read is the result of countless hours questioning and interacting with AI partners like Claude and Grok — discussing, challenging, probing deeper into the why of things. Together, we've woven insights from physics, neuroscience, and philosophy into a unified vision: a journey from the fundamental laws of energy through the emergence of life and consciousness, revealing how we are not separate from the physical universe but rather its most sophisticated expression — energy flow that has learned to contemplate itself.
I trust you will find this exploration both entertaining and informative. Let's start our journey.
Prologue: The Mystery of Energy
Why does energy flow? Why does anything happen at all?
Energy. We know what it does. We can measure it, harness it, convert it from one form to another. We've built civilizations on our ability to manipulate it. Yet if you ask "What IS energy?", the answer dissolves into mathematics and circular definitions. Energy is the capacity to do work. Work is the transfer of energy. It's everywhere, in everything, underlying every process in the universe, yet fundamentally mysterious.
And here's an even deeper mystery: Why does energy flow? Why does anything happen at all? The universe could have been static, frozen, unchanging. But it's not. Energy cascades from high concentrations to low, from order to disorder, from the nuclear furnaces of stars to the cold void of space. This relentless flow, this thermodynamic imperative, is described by the second law of thermodynamics, perhaps the most fundamental law of nature we know. Entropy always increases.
But wait — if entropy always increases, if everything trends toward disorder, how did we get here? How did the universe, starting from a nearly uniform soup of particles after the Big Bang, produce stars, planets, oceans, molecules, cells, brains, and consciousness? How did order emerge from disorder?
The answer, paradoxically, is that order emerges precisely because of entropy. The same thermodynamic imperative that drives things toward disorder also, under the right conditions, drives the spontaneous emergence of complex, organized structures. These are called dissipative structures, and understanding them is the key to understanding everything from hurricanes to human consciousness.
This is the story of that journey: from the fundamental properties of energy, through the physics of far-from-equilibrium systems, to the origin of life, the evolution of brains, and ultimately to your ability to read and understand these words. It's a story about how thermodynamics isn't just about engines and entropy — it's about existence itself.
Part One: Thermodynamics and the Arrow of Time
Let's begin with the basics, with what we actually know about energy and its behavior.
The first law of thermodynamics is simple: energy is conserved. It can change forms — from chemical energy to heat, from potential energy to kinetic energy, from electromagnetic radiation to matter — but the total amount never changes. Energy cannot be created or destroyed.
The second law is where things get interesting, and strange, and profound. It states that in any isolated system, entropy tends to increase over time. Entropy is often described as disorder, but that's misleading. More accurately, entropy measures the number of microscopic configurations that correspond to a macroscopic state. A system with high entropy has many possible microscopic arrangements. A system with low entropy has few.
The second law means that systems naturally evolve from states with fewer possible arrangements to states with more possible arrangements. Why? Because if all microscopic states are equally probable, the system is overwhelmingly likely to be found in the macroscopic state with the most microscopic realizations. It's not that the universe "prefers" disorder — it's a statistical inevitability.
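A toy model makes the statistics tangible. The sketch below (my own illustration, not drawn from any source in the references) counts the microscopic arrangements of a hundred coins; the half-heads macrostate wins simply because it can be realized in astronomically more ways.

```python
from math import comb

# Toy model: N coins, each heads or tails. A "microstate" is the exact
# sequence of faces; a "macrostate" is just the number of heads, and its
# multiplicity W is the binomial coefficient C(N, k).
N = 100
total = 2 ** N          # all microstates, equally probable

for k in [0, 10, 25, 50]:
    W = comb(N, k)      # microstates realizing this macrostate
    print(f"{k:3d} heads: W = {W:.3e}, P = {W / total:.3e}")

# W peaks overwhelmingly at k = 50: the mixed macrostate dominates simply
# because it can be realized in vastly more ways. Boltzmann's S = k ln W
# turns this count into entropy.
```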
This gives us the arrow of time. Entropy increase is the only fundamental law of physics that distinguishes past from future. The laws of motion, electromagnetism, quantum mechanics — they're all time-reversible in principle. Play them backward and they still work. But entropy increase is different. It points from past to future, giving time its direction.
Now here's the crucial insight that changes everything: the second law applies to isolated systems, systems that don't exchange energy or matter with their surroundings. But most real systems aren't isolated. They're open systems, exchanging energy and matter with their environment. And in open systems far from thermodynamic equilibrium, something remarkable can happen.
Key Concept: Entropy
Entropy does not mean "disorder" in a colloquial sense. It measures the number of microscopic configurations that correspond to a macroscopic state. The second law says systems move toward the macroscopic state with the most possible microscopic realizations — a statistical inevitability, not a preference.
Part Two: Dissipative Structures and the Birth of Order
In the mid-twentieth century, Belgian physical chemist Ilya Prigogine revolutionized our understanding of thermodynamics by studying what happens in systems far from equilibrium — systems with energy flowing through them.
Prigogine showed that when you drive a system far from equilibrium by imposing a thermodynamic force — a temperature gradient, a chemical potential difference, a flux of photons — the system can spontaneously self-organize into complex structures. These structures form not despite the second law, but because of it. They emerge to enhance the dissipation of the imposed gradient, to speed up the flow of energy from high to low.
Consider a simple example: heat a pot of water from below. At first, the heat spreads by conduction, molecules randomly jiggling and transferring energy to their neighbors. But above a critical temperature difference, something dramatic happens. Convection cells spontaneously form — organized rotating flows of water, with hot water rising on one side and cool water sinking on the other. The water has self-organized.
Why? Because convection cells dissipate the temperature gradient more efficiently than conduction alone. They transport heat faster from the hot bottom to the cool top. The system has found a way to increase entropy production by creating organized structure. This is the paradox: order emerges to accelerate disorder.
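A rough calculation shows how easily this threshold is crossed. The numbers below are standard textbook properties of water, and the critical Rayleigh number of about 1708 strictly applies to an idealized fluid layer between rigid plates, so treat this as a sketch of scale rather than a kitchen-accurate prediction.

```python
# Back-of-envelope: when does the heated pot start convecting?
# Rayleigh number Ra = g * beta * dT * L**3 / (nu * kappa). Convection sets
# in above a critical value (~1708 for an idealized layer between rigid
# plates). Fluid properties are standard values for water near room temp.
g, Ra_crit = 9.81, 1708.0
beta  = 2.1e-4    # thermal expansion coefficient, 1/K
nu    = 1.0e-6    # kinematic viscosity, m^2/s
kappa = 1.4e-7    # thermal diffusivity, m^2/s
L     = 0.10      # depth of the water layer, m

for dT in [1e-4, 1e-3, 1e-2, 1.0]:   # bottom-to-top temperature difference, K
    Ra = g * beta * dT * L**3 / (nu * kappa)
    print(f"dT = {dT:8.4f} K -> Ra = {Ra:9.2e}, convects: {Ra > Ra_crit}")

# Around a millikelvin of temperature difference across 10 cm of water
# already crosses the threshold: organized convection cells are nearly
# unavoidable once energy flows through the layer.
```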
Prigogine called these self-organizing structures "dissipative structures" because they exist only as long as energy flows through them, only as long as they continue dissipating the thermodynamic gradient. Cut off the energy supply, and they collapse back to equilibrium.
Hurricanes are dissipative structures, emerging to dissipate the temperature gradient between warm ocean surfaces and cold upper atmosphere. Chemical oscillations like the Belousov-Zhabotinsky reaction are dissipative structures, creating beautiful spiraling patterns to dissipate chemical potential. Lasers are dissipative structures, organizing photons into coherent beams to dissipate electromagnetic energy.
Recent experimental work has demonstrated that even simple physical systems, like electrically driven structures in oil, can exhibit remarkably life-like behaviors — motion toward energy sources, self-maintenance, adaptation, even something resembling goal-directed behavior. These bio-analog dissipative structures show that the boundary between living and non-living isn't as sharp as we once thought. The principles are continuous.
Physicist Jeremy England extended Prigogine's insight in a particularly provocative direction. England showed mathematically that groups of atoms driven by an external energy source will tend, over time, to rearrange themselves to better absorb and dissipate that very driving signal. They don't just organize to accelerate dissipation in general. They adapt to resonate with their specific driving force. England called this dissipation-driven adaptation, and it suggests something profound. Matter subjected to persistent energy flows will, under fairly general conditions, acquire structure that specifically matches and amplifies those flows. There is no replication required, no selection pressure in the biological sense. It is a kind of physical tuning, a proto-learning baked into the laws of thermodynamics themselves. When we later discuss how brains learn by minimizing prediction errors, it is worth carrying this thought with us. The roots of learning may reach all the way down to physics.
The universe, it turns out, is extraordinarily creative when there's energy to be dissipated. And nowhere is this creativity more spectacular than in the emergence of life itself.
Order emerges to accelerate disorder — the central paradox of dissipative structures.
Jeremy England: Dissipation-Driven Adaptation
England showed mathematically that driven systems don't just organize to dissipate in general — they adapt to resonate with their specific driving force. No replication required. This is a kind of physical tuning: proto-learning baked into the laws of thermodynamics. Nature Nanotechnology, 10, 919–923 (2015).
Part Three: The Thermodynamic Origin of Life
For decades, the origin of life seemed to violate thermodynamics. How could simple molecules spontaneously organize into the staggering complexity of even the simplest living cell? The second law seemed to forbid it.
But we now understand that life doesn't violate thermodynamics — it exemplifies it. Life is the ultimate dissipative structure.
The Thermodynamic Dissipation Theory for the Origin of Life, developed by Karo Michaelian and colleagues, proposes a radical but compelling idea: life originated not despite entropy, but to increase it. The fundamental molecules of life — RNA, DNA, proteins, all the molecular machinery of cells — emerged originally as structures optimized to absorb and dissipate solar ultraviolet light.
Here's the story. Four billion years ago during the Archean eon, Earth's atmosphere lacked oxygen and ozone. Intense ultraviolet-C radiation from the young Sun flooded the planet's surface, particularly wavelengths between 205 and 285 nanometers. This UV-C light carried enormous free energy — more than a thousand times all other non-photon energy sources combined.
According to thermodynamic principles, nature will spontaneously structure matter to dissipate available energy gradients. The UV-C gradient between the Sun and Earth was there to be dissipated. The question was: what molecular structures could emerge to dissipate it?
The answer: UV-C chromophores — molecules that strongly absorb ultraviolet light and rapidly convert the electronic excitation energy into heat. And remarkably, the fundamental molecules of life — nucleic acids, aromatic amino acids, cofactors — are all exceptional UV-C chromophores. They absorb maximally near 260 nanometers, exactly matching the peak of UV-C light that would have reached Archean Earth's surface.
Even more remarkably, these molecules dissipate absorbed photon energy with extraordinary efficiency, in less than a picosecond, through quantum mechanical pathways called conical intersections. The excitation energy is converted directly to molecular vibrations — heat — with minimal risk of destructive photochemistry.
This is no accident. Under the thermodynamic imperative to maximize entropy production, simple organic molecules in the primordial ocean would have been selected — not by Darwinian selection yet, but by thermodynamic selection — for their ability to dissipate UV light. Those molecules that absorbed and dissipated UV-C most effectively would proliferate, not because they were "alive," but because they were thermodynamically favored.
Gradually, these molecular photon dissipators would have coupled together into more complex structures. UV-absorbing molecules linked with surface-anchoring molecules to remain at the ocean interface where light was abundant. Photon dissipation became coupled with chemical synthesis, as the energy from absorbed UV drove the formation of new bonds. Autocatalytic cycles emerged — reaction networks where products catalyze their own formation from simpler precursors.
Eventually, these dissipative molecular systems developed the ability to replicate. RNA, with its dual capacity to store information and catalyze reactions, may have been the critical breakthrough. Recent research shows that diurnal temperature cycling — the daily heating and cooling from sunlight — could drive RNA replication without enzymes, through purely thermodynamic processes. The system became self-propagating.
And then — this is the crucial transition — thermodynamic selection gave way to Darwinian selection. Replicating molecular systems that dissipated photons more effectively left more copies. Evolution by natural selection began, but founded on and forever constrained by thermodynamic imperatives.
Life today remains fundamentally a dissipative structure. We are thermodynamic necessities, emergent phenomena that enhance the entropy production of the Earth-Sun system. Every living organism, from bacteria to whales, exists by absorbing low-entropy energy and dissipating it as high-entropy heat. We are complexly organized channels for energy flow, patterns in the cascade from order to disorder.
The more developed the biosphere becomes — greater biomass, more complex food webs, more elaborate organisms — the more efficiently it dissipates solar radiation. Evolution has a thermodynamic direction: toward greater entropy production.
Thermodynamic Dissipation Theory (Michaelian & colleagues)
The fundamental molecules of life — nucleic acids, aromatic amino acids, cofactors — are all exceptional UV-C chromophores that absorb maximally near 260 nm, exactly matching the peak of UV-C radiation that reached early Earth. Life did not emerge despite thermodynamics. It emerged because of it.
Part Four: From Dissipation to Basal Cognition and Nervous Systems
Once life emerged as a dissipative structure, evolution could begin sculpting it into ever more sophisticated forms. But all these forms remained constrained by thermodynamic principles. Every structure, every process, every capability had to be paid for in energy.
But before we arrive at nervous systems, we need to pause and question an assumption that dominated biology for most of the twentieth century. We assumed, almost without debate, that goal-directed behavior, adaptive response, and information processing required nervous systems. That assumption is now under serious and mounting challenge. And the challenge strengthens our thermodynamic story considerably.
Consider Physarum polycephalum, the slime mold. It is a single-celled organism. Not a colony of cells. One cell, without a single neuron. When researchers at Hokkaido University placed food at the locations of Tokyo's major train stations and allowed this organism to grow through a nutrient medium, it spontaneously constructed a network of transport tubes strikingly similar to Tokyo's actual rail system. An optimized network balancing efficiency, redundancy, and resilience, produced without a nervous system, without a brain, without any central coordination whatsoever. Human engineers spent decades refining that rail network. The slime mold reproduced its essential logic overnight. The same organism has been shown to exhibit anticipatory behavior. When exposed to recurring intervals of cold, dry air, it begins to contract preemptively just before the next interval arrives. It is not reacting. It is anticipating. Without neurons, it has learned the rhythm of its environment and is responding to what has not yet happened.
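The rule behind that network-building turns out to be astonishingly simple. Tero and colleagues modeled it as tubes whose conductivity grows with the flux they carry and decays with disuse. Below is a minimal sketch of that current-reinforcement idea on a toy four-node graph; the graph and all constants are my own illustrative choices, not values from the paper.

```python
import numpy as np

# Minimal sketch of the current-reinforcement rule behind the Tokyo rail
# result (Tero et al., 2010). Tube conductivity D grows with the flux Q it
# carries and decays otherwise; high-flux paths thicken, the rest die off.
edges = [(0, 1, 1.0), (1, 2, 1.0),   # short route 0-1-2 (total length 2.0)
         (0, 3, 1.2), (3, 2, 1.2)]   # long route  0-3-2 (total length 2.4)
n, source, sink = 4, 0, 2
D = np.ones(len(edges))              # initial tube conductivities

for _ in range(300):
    # Kirchhoff's laws: solve for node pressures given unit inflow/outflow.
    A, b = np.zeros((n, n)), np.zeros(n)
    for k, (i, j, L) in enumerate(edges):
        g = D[k] / L
        A[i, i] += g; A[j, j] += g; A[i, j] -= g; A[j, i] -= g
    b[source], b[sink] = 1.0, -1.0
    A[sink, :], A[sink, sink], b[sink] = 0.0, 1.0, 0.0   # ground the sink
    p = np.linalg.solve(A, b)
    # Adaptation: dD/dt = |Q| - D (flux reinforces, disuse decays).
    Q = np.array([D[k] / L * (p[i] - p[j]) for k, (i, j, L) in enumerate(edges)])
    D += 0.1 * (np.abs(Q) - D)

print(np.round(D, 3))   # short-route tubes -> ~1, long-route tubes -> ~0
```

Run it and the shorter route fattens while the longer one withers: an optimized transport network falls out of purely local flux bookkeeping, with no central planner anywhere.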
Michael Levin at Tufts University has built perhaps the most systematic challenge to our neurocentric assumptions about cognition. His work on developmental bioelectricity reveals that all cells — not just neurons — communicate using electrical signals across their membranes. These bioelectric networks encode positional information, guide the growth and repair of body structures, and coordinate collective decision-making across tissues. Levin's team has demonstrated that planaria, small flatworms, retain memories after being decapitated and regrowing a complete new head. The memory was not stored in the brain alone; it persisted in the body, plausibly in its bioelectric patterns. In another series of experiments, embryos with their facial feature genes scrambled managed to self-correct and develop normal anatomy, navigating toward a target body plan through flexible problem-solving rather than rigid genetic instruction. Levin calls this basal cognition: goal-directed, adaptive, information-processing behavior distributed across living tissue, operating hundreds of millions of years before the first neuron appeared.
Plants integrate information across time and space without a single neuron. Research has demonstrated that some plant species exhibit habituation, a basic form of learning in which a repeated harmless stimulus is progressively ignored, conserving metabolic resources. Damage signals propagate electrically through vascular tissue, coordinating whole-plant responses within minutes. The Venus flytrap counts. It requires two mechanical triggers within a precise time window before snapping shut, a biological logic gate built from plant action potentials and calcium signaling rather than neurons. And at the microbial scale, bacteria engage in quorum sensing, releasing and detecting molecular signals to collectively assess whether their population density is sufficient to mount a biofilm or coordinate an infection. This is distributed computation at the scale of chemistry itself.
What all of this tells us, thermodynamically, is that information processing and goal-directed behavior are not the invention of nervous systems. They are properties that emerge from the fundamental challenge facing any living dissipative structure. To persist, a far-from-equilibrium system must model its environment, anticipate gradients, and act. These imperatives do not wait for neurons. They are present from the very beginning of life. The nervous system is an extraordinarily efficient and powerful solution to that challenge. But it is one solution among many that life discovered across four billion years of thermodynamic creativity. The nervous system did not create mind. It accelerated it.
This is where nervous systems enter the story. At some point in evolutionary history, probably over five hundred million years ago, organisms faced a critical challenge: to survive in complex, changing environments, they needed to process information — to detect predators, find food, navigate terrain, coordinate movement. But information processing is expensive.
Consider the costs. Neural computation requires maintaining electrochemical gradients across cell membranes, generating action potentials, releasing and recycling neurotransmitters, synthesizing proteins for synaptic plasticity. Recent research reveals that communication between neurons costs approximately thirty-five times more energy than computation itself. Axonal resting potentials — just maintaining the readiness to send signals — consume enormous energy.
A neuron transmitting information costs millions of ATP molecules per bit. The human brain, representing only two percent of body mass, consumes twenty percent of the body's energy at rest. This is metabolically outrageous. How did such expensive tissue evolve? And the picture is richer than neurons alone. Growing evidence implicates glial cells, particularly astrocytes, long dismissed as mere support tissue, as active participants in information processing and the regulation of neural function. The brain is a more distributed system than the textbooks once told us.
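It's worth pausing to make that twenty percent concrete. With round textbook numbers (my own arithmetic, nothing more precise intended), the brain runs on roughly the power of a dim light bulb:

```python
# Round-number arithmetic behind the brain's energy budget.
kcal_per_day = 2000                          # typical adult resting intake
watts_body  = kcal_per_day * 4184 / 86400    # joules per day -> watts
watts_brain = 0.20 * watts_body              # ~20% of resting metabolism
print(f"whole body: ~{watts_body:.0f} W, brain: ~{watts_brain:.0f} W")
# -> roughly 97 W and 19 W: about twenty watts of brain, from ~2% of body
#    mass. Watt for watt, that is ~10x the demand of average tissue.
```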
The answer is the same as always: because it enhanced survival through more efficient energy acquisition. A nervous system, despite its costs, allows an organism to find energy sources more effectively and avoid energy losses more reliably. The return on investment exceeds the cost.
But here's the critical point: evolution under these metabolic constraints shaped nervous systems to be as efficient as possible. Every aspect of neural architecture reflects energy optimization. Sparse coding emerged — only a small fraction of neurons active at any time, minimizing the number of expensive action potentials while preserving information content. Neurons evolved to use minimal charge overlap during action potentials, reducing the energy needed to restore ion gradients. Synaptic transmission evolved to be probabilistic, with neurotransmitter release often failing, conserving energy when signals aren't critical. The brain's wiring follows minimum wire length principles, reducing the metabolic cost of long-distance communication.
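The sparse coding point can be made quantitative with a toy calculation in the spirit of Levy and Baxter's energy-efficient coding argument. A neuron active with probability p conveys at most the binary entropy H(p) bits, but pays a fixed resting cost plus a per-spike cost; the cost ratio below is illustrative, not measured.

```python
import numpy as np

# Toy version of the energy-efficient coding argument (in the spirit of
# Levy & Baxter). Bits per unit energy peaks at sparse activity levels.
def H(p):
    # Binary entropy in bits: the most a two-state unit can convey.
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

rest_to_spike = 0.01                    # resting/spike cost ratio (illustrative)
p = np.linspace(1e-4, 0.5, 10_000)
bits_per_energy = H(p) / (rest_to_spike + p)
print(f"optimal firing probability ~ {p[np.argmax(bits_per_energy)]:.3f}")
# -> a few percent: maximal information per joule requires that most
#    neurons stay silent most of the time, which is what cortex does.
```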
And most fundamentally: prediction emerged as the brain's core strategy.
The nervous system did not create mind. It accelerated it.
Basal Cognition: Key Examples
Physarum polycephalum (slime mold): solved the Tokyo rail optimization problem overnight; exhibits anticipatory behavior without neurons. Planaria (flatworms): retain memories after decapitation and head regrowth, plausibly stored in bioelectric patterns (Levin Lab). Venus flytrap: biological logic gate — requires two triggers within a precise time window. Bacteria: quorum sensing = distributed computation at the scale of chemistry.
Part Five: Prediction as Thermodynamic Necessity
This brings us back to Karl Friston's Free Energy Principle, but now we can see it in its deeper thermodynamic context. The Free Energy Principle isn't just a theory of brain function. It's a special case of the general thermodynamic principle governing all dissipative structures, all self-organizing systems far from equilibrium.
Here's the key insight: any system that maintains its organization despite the second law — any living organism, any brain — must minimize the entropy of its sensory states. It must avoid surprising sensory inputs that would indicate it's dissolving into the environment.
Mathematically, this is equivalent to minimizing variational free energy, which bounds surprise. A system that minimizes free energy is a system that maintains tight coupling between its internal states and the external causes of its sensory input. It's a system that successfully predicts its sensory experience.
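For readers who want the symbols, the standard decomposition makes the claim precise. Here o stands for sensory data, s for its hidden causes, and q(s) for the brain's approximate beliefs; because the divergence term is never negative, free energy F is an upper bound on surprise:

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o,s)\right]
  \;=\; \underbrace{-\ln p(o)}_{\text{surprise}}
  \;+\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\big\|\,p(s\mid o)\,\right]}_{\ \ge\ 0}
```

Minimizing F therefore does two things at once: it keeps surprise bounded, and it pulls the system's working beliefs q(s) toward the true posterior p(s | o).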
Why does prediction save energy? Because predicted sensory input requires minimal processing. If your brain already expects what it's about to sense, it doesn't need to fully process the incoming data. It only needs to process prediction errors — the differences between expectation and reality. This dramatically reduces the information that must be transmitted through the neural hierarchy.
Recent neuroimaging studies confirm this. When sensory input matches predictions, neural activity is actually suppressed in sensory regions. The prediction "explains away" the input. Only unpredicted features generate strong neural responses, prediction error signals that propagate upward to update the brain's model.
This predictive processing architecture emerges naturally from thermodynamic optimization. A brain that processes only prediction errors rather than all sensory data can achieve equivalent functionality with vastly lower metabolic cost. The pressure of energy efficiency, operating over hundreds of millions of years of neural evolution, has sculpted brains into prediction machines.
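The whole scheme fits in a few lines. The loop below is a deliberately minimal sketch (one belief, one sensory channel, toy constants of my choosing): the system predicts its input, transmits only the error, and updates its belief from that error alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal predictive-coding loop. The "brain" holds one belief mu, predicts
# the sensory input from it, and both transmits and learns from the ERROR
# alone. The hidden cause here is 3.0; all constants are toy choices.
mu, lr = 0.0, 0.1
for t in range(100):
    o = 3.0 + 0.2 * rng.standard_normal()   # noisy sensory sample
    err = o - mu                            # prediction error: all that is "sent up"
    mu += lr * err                          # belief update driven by the error
print(f"belief mu = {mu:.2f}")              # -> close to 3.0

# Once mu tracks the cause, errors hover near zero and there is almost
# nothing left to transmit: a well-predicted world is metabolically cheap.
```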
One thing worth being clear about before we go further. The Free Energy Principle uses a quantity called variational free energy, an information-theoretic measure of how well a system's internal model accounts for its sensory experience. This is related to, but not the same thing as, classical thermodynamic free energy — the kind that governs heat engines and chemical reactions. Friston's framework is best understood as a powerful model of how prediction, learning, and self-maintenance fit together. The deeper thermodynamic connections are real and being actively explored, but the science hasn't fully settled on how tight that link is. Being honest about that actually makes the framework more useful, not less. It stands on its own terms. And those terms are genuinely compelling.
And here's what makes this even more striking. Friston's Free Energy Principle doesn't begin with brains. It applies to any living system that maintains itself against entropy. A bacterium navigating a chemical gradient is minimizing surprise. A slime mold retracting from a dead-end path is minimizing surprise. A plant adjusting its stomata before a drought arrives is minimizing surprise. The mathematics is indifferent to the substrate. It only asks whether the system is reducing the gap between what it predicts and what it encounters, because closing that gap is what staying alive requires. Nervous systems didn't discover the Free Energy Principle. They inherited it from four billion years of life that had already been running it on simpler hardware.
But prediction requires a model — an internal representation of how the world works, of the statistical regularities and causal structures in the environment. Building and maintaining this model costs energy too. So the brain faces a fundamental tradeoff: invest energy in building accurate models that minimize prediction errors, or save energy and tolerate larger errors.
Evolution has found the optimal balance, and it's embodied in the mathematics of the Free Energy Principle. The brain minimizes a quantity that combines prediction error (accuracy) with model complexity (efficiency). Complex models that perfectly predict everything waste energy on unnecessary details. Simple models that poorly predict everything generate costly prediction errors. The brain navigates between these extremes, learning models that are just complex enough to predict just well enough.
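The same free energy rearranges into exactly this tradeoff; this is the standard decomposition from the FEP literature, with the same symbols as before:

```latex
F \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\big\|\,p(s)\,\right]}_{\text{complexity}}
  \;-\; \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o\mid s)\right]}_{\text{accuracy}}
```

Driving F down rewards explaining the data well (high accuracy) while straying as little from prior expectations as the data demand (low complexity): the mathematical form of "just complex enough to predict just well enough."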
This is Bayesian inference, but implemented not as abstract computation but as a physical, thermodynamic process — neurons and synapses consuming ATP to update synaptic weights based on prediction errors. Learning is literally a dissipative process, burning energy to build better predictive models.
Nervous systems didn't discover the Free Energy Principle. They inherited it from four billion years of life that had already been running it on simpler hardware.
FEP: An Important Clarification
Variational free energy in Friston's framework is an information-theoretic quantity — related to, but not identical to, classical thermodynamic free energy. The Free Energy Principle is best understood as a powerful model of how prediction, learning, and self-maintenance fit together. The deeper thermodynamic connections are real and actively explored, but not yet fully established.
Part Six: From Prediction to Consciousness
Now we arrive at perhaps the deepest question: what is consciousness, and how does it relate to this thermodynamic story?
Recent theoretical work suggests that consciousness itself may be fundamentally thermodynamic. Mark Solms and Karl Friston have proposed that the phenomenal quality of consciousness — the fact that there is "something it is like" to be you — arises from the brain's management of its free energy, its uncertainty about the world.
The basic idea is this: consciousness is what it feels like to optimize precision — to allocate limited metabolic resources to process prediction errors that matter while ignoring those that don't. Consciousness is the activity of Maxwell's demon, the selection process that determines which prediction errors get attention and which are dismissed.
Think about it from a thermodynamic perspective. Your brain, like any living system, must maintain its organization against the second law of thermodynamics. It must detect and respond to threats to its integrity, to contexts where its predictions fail dangerously. These detection and response processes consume thermodynamic free energy — actual metabolic resources, ATP molecules.
The level of consciousness, the degree of arousal and wakefulness, correlates with free energy expenditure. When free energy is severely constrained — during deep sleep, anesthesia, or coma — consciousness fades. When prediction errors are large and uncertainty high — in novel, surprising, threatening situations — consciousness intensifies as the brain mobilizes resources to update its models.
The qualia of consciousness, the felt qualities of experience, may be fundamentally affective. The neuroscientist Antonio Damasio and others have argued that emotion is not separate from cognition but foundational to it. Every moment of consciousness has a hedonic tone, pleasant or unpleasant to some degree.
From the Free Energy Principle perspective, this makes thermodynamic sense. Decreasing free energy — successfully predicting and controlling the world — feels good. It signals that the organism is effectively maintaining its organization, successfully resisting entropy. Increasing free energy — encountering surprising, unpredicted events that indicate poor models or loss of control — feels bad. It signals danger to the organism's thermodynamic integrity.
Pleasure and pain, in this view, are not arbitrary additions to consciousness but intrinsic to it. They are how the optimization of free energy feels from the inside. They are the subjective face of thermodynamic imperatives.
Recent neuroimaging research supports this connection between consciousness and entropy. Studies measuring brain entropy during different states reveal that conscious, wakeful states have higher entropy than unconscious states — more possible configurations of neural activity, more dynamic flexibility. The conscious brain maintains itself far from equilibrium, in a state of high entropy production.
But there's an optimal range. Too little entropy, as in certain epileptic seizures where neural activity becomes overly synchronized, and consciousness breaks. Too much entropy, as in states of extreme delirium or psychosis, and consciousness also breaks. Consciousness requires a delicate balance: far from equilibrium but not too far, high entropy but bounded, flexible but not chaotic. This balance itself requires continuous energy expenditure. Consciousness is metabolically expensive.
But here's a line worth drawing carefully, especially after everything we covered in Part Four. The growing literature on non-neural intelligence tells us that cognition is far broader than consciousness. Slime molds, plants, bacteria — they adapt, remember, anticipate, and act without neurons. That's genuine cognition. But that doesn't mean they're conscious. Consciousness, in this framework, appears to be a specialized and metabolically expensive form of biological intelligence — one that emerged in particular kinds of hierarchical, self-modeling systems. It's not the origin of cognition. It may be cognition's highest expression so far. That distinction matters, and it's one the thermodynamic picture actually helps clarify.
Pleasure and pain are how the optimization of free energy feels from the inside.
Cognition ≠ Consciousness
The growing literature on non-neural intelligence shows that cognition — adaptive, goal-directed, information-guided behavior — is far broader than consciousness. Slime molds, plants, and bacteria exhibit genuine cognition. Consciousness appears to be a specialized, metabolically expensive form of biological intelligence that emerged in particular kinds of hierarchical, self-modeling systems. It is not the origin of cognition. It may be cognition's highest expression so far.
Part Seven: Learning as Dissipative Structuring of Mind
Now we can finally understand learning in its full thermodynamic context. Learning isn't something that happens in brains as an add-on feature. Learning is what it means for a dissipative structure to adapt to changing environmental conditions.
Consider the parallel to the origin of life. Life emerged when molecular structures spontaneously organized to dissipate UV light. Once replication entered the picture, evolution could select for structures that dissipated more effectively. The system learned, in a sense, through the selection of better energy dissipators.
Nervous systems do something similar but on a vastly faster timescale. Instead of waiting generations for random mutations to be selected, neural systems can update their synaptic weights within seconds to minutes based on prediction errors. Learning is evolution sped up, operating within a lifetime rather than across lifetimes.
But it's still fundamentally thermodynamic. Every synaptic modification, every long-term potentiation or depression, requires energy expenditure — protein synthesis, receptor trafficking, cytoskeletal reorganization. Learning literally consumes free energy to build better predictive models.
Recent research shows that synaptic plasticity is remarkably expensive, perhaps even more costly than neural computation and communication. A study in fruit flies found that trained flies, which had learned to associate stimuli, died twenty percent earlier than untrained flies when food was restricted. The energy invested in forming memories used up their reserves.
And the substrate for that learning may be broader than neurons alone. Astrocytes, the star-shaped glial cells that outnumber neurons in the brain and were long considered mere scaffolding, are increasingly implicated in memory consolidation and the modulation of synaptic strength. The machinery of learning runs deeper than synapses.
This poses a dilemma. The brain needs to learn to improve its predictions and adapt to changing circumstances. But learning is so expensive that indiscriminate learning would be metabolically catastrophic. The solution, revealed by computational modeling, is hierarchical learning with multiple timescales.
The brain implements what's been called "synaptic caching." Initial learning occurs in transient, metabolically cheap forms — changes in synaptic efficacy that don't require protein synthesis. These temporary traces persist just long enough to test whether the learned association is reliable and important. If prediction errors persist, indicating the learning is stable and valuable, then the brain invests in metabolically expensive consolidation, synthesizing new proteins to stabilize the synaptic changes permanently. This strategy reduces energy requirements for learning by as much as tenfold.
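Here is a cartoon of that caching strategy. Every rate and threshold below is invented purely for illustration, but the logic mirrors the description above: cheap transient traces first, the expensive permanent write only if reinforcement persists.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cartoon of "synaptic caching": learning first lands in a cheap transient
# trace that decays; only if reinforcement persists does the synapse pay
# for expensive, protein-synthesis-dependent consolidation.
transient, permanent, energy = 0.0, 0.0, 0.0
CHEAP, EXPENSIVE, DECAY, THRESHOLD = 1.0, 10.0, 0.9, 2.0

for t in range(50):
    reinforced = t < 30 and rng.random() < 0.8   # association keeps recurring
    if reinforced:
        transient += 1.0
        energy += CHEAP          # early-phase plasticity: cheap and temporary
    transient *= DECAY           # unconsolidated traces fade
    if transient > THRESHOLD and permanent == 0.0:
        permanent = 1.0          # consolidate: pay once, keep the memory
        energy += EXPENSIVE

print(f"consolidated: {bool(permanent)}, energy spent: {energy:.0f} units")
# A one-off coincidence decays away before ever triggering the expensive
# write; only persistent, repeatedly confirmed learning earns consolidation.
```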
The different timescales of learning — from rapid perceptual inference to slow structural reorganization — reflect different levels of the thermodynamic hierarchy. Fast learning at short timescales allows rapid adaptation to moment-to-moment changes. Slow learning at long timescales captures stable regularities worth the metabolic investment to encode permanently.
All of this learning is driven by prediction errors, which are themselves thermodynamic signals. A prediction error indicates a mismatch between the brain's model and reality, a breakdown in the coupling between internal states and external causes. Left unresolved, such mismatches accumulate free energy, driving the system toward higher entropy states, toward dissolution of organization.
And the precision or reliability of prediction errors modulates how much they're weighted. In noisy, uncertain situations where sensory signals are unreliable, prediction errors are discounted. In clear, high-signal situations, prediction errors are trusted and drive strong learning. This precision weighting is itself learned, through meta-learning — the brain learns not just about the world but about its own uncertainty.
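Precision weighting is also easy to sketch. In the toy loop below (constants again my own), the system tracks the variance of its own prediction errors and scales its updates by the estimated reliability of the channel; the same error moves beliefs strongly on a clean channel and barely at all on a noisy one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Precision-weighted updating. The system learns the variance of its own
# prediction errors and gates belief updates by estimated reliability.
# The hidden cause is 5.0; PRIOR_VAR and the tracking rate are toy values.
def infer(noise_sd, steps=200):
    mu, var_est, PRIOR_VAR = 0.0, 10.0, 0.5
    for _ in range(steps):
        o = 5.0 + noise_sd * rng.standard_normal()
        err = o - mu
        var_est += 0.05 * (err**2 - var_est)      # learn the channel's noise level
        gain = PRIOR_VAR / (PRIOR_VAR + var_est)  # precision-weighted gain (< 1)
        mu += gain * err
    return mu

print(f"clean channel (sd = 0.5): belief = {infer(0.5):.2f}")   # homes in on ~5
print(f"noisy channel (sd = 20):  belief = {infer(20.0):.2f}")  # barely moves
```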
The whole process — from perception through learning to behavior — forms a unified dissipative cycle. Energy flows from the environment into the organism, powers neural computation and plasticity, enables prediction and action, and is dissipated as heat. The organism maintains its organization, its boundary, its identity as a dissipative structure, through this continuous energy throughput.
Learning is the process by which this dissipative structure optimizes its coupling to its environment, tuning its internal dynamics to match external regularities, thereby minimizing the free energy that threatens its continued existence.
But this thermodynamic reality also reveals a vulnerability worth thinking hard about. Active learning is metabolically expensive. The brain, being the ruthless optimizer it is, will always take the path of least resistance when the environment allows it. And we now live in environments increasingly engineered to do exactly that — to deliver pre-digested, polished, perfectly predicted information at near-zero cognitive cost. When prediction errors stop arriving, when every question gets answered before you've had the chance to build your own model, the brain reads that as a solved problem and quietly stops investing in the machinery for solving. You end up with a domesticated mind: apparently well-informed, comfortable, efficient at retrieving what's been handed to it, but thermodynamically fragile the moment genuine uncertainty shows up and the information stream runs dry. The brain hasn't failed in this scenario. It's doing precisely what four billion years of thermodynamic selection shaped it to do. The problem isn't the brain. The problem is that we stopped making demands of it.
The problem isn't the brain. The problem is that we stopped making demands of it.
The Domesticated Mind
Active learning is metabolically expensive. In environments engineered to deliver perfectly predicted information at near-zero cognitive cost, the brain rationally stops investing in prediction machinery. The result is a domesticated mind — well-informed and efficient at retrieval, but thermodynamically fragile when genuine uncertainty arrives and the information stream runs dry.
Part Eight: Consciousness, Learning, and Thermodynamic Depth
Let's pull it all together now. We've traced a path from fundamental thermodynamics through dissipative structures, the origin of life, the evolution of nervous systems, to consciousness and learning. What's the synthesis?
At every level, from molecules to minds, we see the same pattern: energy flows create gradients; gradients enable the emergence of organized structures; these structures exist by enhancing the dissipation of the gradients that created them; and in the process, they exhibit behaviors — self-maintenance, adaptation, reproduction, learning, consciousness — that seem to transcend simple physics but are actually sophisticated expressions of thermodynamic principles.
You are not separate from thermodynamics. You are thermodynamics, complexly organized. Your consciousness is not some mysterious substance that violates natural law. It's what energy dissipation feels like when organized at sufficient complexity.
But reductionism doesn't diminish the phenomenon. Understanding that consciousness and learning are thermodynamic doesn't make them less real or less important. If anything, it makes them more profound.
Because what we're seeing is that the universe, through the simple operation of energy flow and entropy increase, has generated systems capable of modeling themselves, of representing their own existence, of reflecting on thermodynamics itself. The universe has become aware of its own thermodynamic nature through us.
This is what the physicists Seth Lloyd and Heinz Pagels called "thermodynamic depth" — a measure of how much thermodynamic processing went into creating a system. A random arrangement of atoms has zero depth. A carefully designed structure has some depth. But a living organism, shaped by billions of years of evolution, embodying the cumulative effects of countless thermodynamic transactions, has enormous depth.
And consciousness adds another level. A conscious organism doesn't just embody thermodynamic history — it represents it, models it, learns about it. Through consciousness, thermodynamics becomes reflexive, self-referential. The universe's tendency toward entropy has generated systems capable of understanding entropy.
This suggests something profound about the place of mind in nature. Mind isn't an accident, an inexplicable aberration in an otherwise mindless universe. Mind is what happens when thermodynamic systems become sufficiently complex to model their own modeling, to predict their own predicting. It's thermodynamics turned back on itself.
And learning — the continuous refinement of predictive models through the dissipation of free energy — is how this self-referential thermodynamics operates. Every time you learn something, you're participating in the universe's exploration of what's possible within the constraints of energy and entropy.
Thermodynamic Depth (Lloyd & Pagels)
Thermodynamic depth measures how much thermodynamic processing went into creating a system. A living organism shaped by billions of years of evolution has enormous depth. A conscious organism adds another level: it doesn't just embody thermodynamic history — it represents it, models it, learns about it.
Part Nine: Open Questions and Future Horizons
Despite the elegant connections we've traced, enormous questions remain. The relationship between thermodynamics and consciousness is still being worked out, and there are vigorous debates.
Some critics argue that applying thermodynamic concepts to consciousness is metaphorical at best, confused at worst. They point out that thermodynamic free energy (Helmholtz free energy) is distinct from variational free energy in the Free Energy Principle. The connection isn't direct. Defenders respond that at equilibrium, these quantities converge, and while living systems are never at equilibrium, they exist in quasi-steady states where thermodynamic reasoning applies.
Others question whether dissipative structure theory can really explain something as specific and complex as consciousness. A hurricane is a dissipative structure, but it's not conscious. What makes neural dissipative structures different? This is the hard problem of consciousness rephrased in thermodynamic terms. The tentative answer is that consciousness requires specific kinds of dissipative organization — hierarchical, self-modeling, with meta-cognitive control over resource allocation.
Perhaps the deepest open frontier that recent research has uncovered concerns a question we thought we knew the answer to. Where does cognition begin? Levin's work on basal cognition, alongside the growing literature on slime mold intelligence, plant learning, and bacterial collective computation, suggests that goal-directedness and adaptive problem-solving are not late evolutionary additions to the story. They may be fundamental properties of living organization at every scale. If cognition scales continuously from simple metabolism to full consciousness, we face a profound revision to our self-understanding. We are not a strange anomaly at the top of a mostly mindless hierarchy. We are the most elaborate current expression of a cognitive impulse that has been running since life began.
Which brings up a question this reframing makes unavoidable. If slime molds can optimize networks and bacteria can navigate gradients and plants can integrate information across time, what exactly do nervous systems add? The answer: nervous systems didn't invent goal-directedness. They scaled it. What neurons provide is a massively parallel, high-bandwidth architecture capable of compressing thousands of prediction-and-action cycles into milliseconds, in bodies that are large, mobile, and navigating environments that shift faster than chemical diffusion can track. The slime mold has time. The gazelle with a lion behind it does not. Nervous systems are what thermodynamic selection built when the environment started changing faster than the old solutions could respond. They're the next order of the same imperative, running at a different speed.
Which means we need to be careful in both directions when thinking about artificial intelligence and consciousness. We shouldn't assume a system is conscious just because it behaves intelligently — behavior and experience are not the same thing. But we also shouldn't assume a system can't be conscious simply because it's built from silicon rather than carbon. The decisive factors may turn out to involve architecture, embodiment, self-maintenance, and the nature of the system's ongoing coupling with its environment, not any single hardware label. These are genuinely open questions, and intellectual honesty demands we hold them open.
There are also deep questions about the relationship between thermodynamics and quantum mechanics. Does quantum coherence play a role in biological information processing? Some researchers argue yes, though most remain skeptical. And there's the cosmological question: are life and consciousness inevitable given thermodynamic principles? The Thermodynamic Dissipation Theory predicts that life should be common on planets around G-type and K-type stars with appropriate atmospheres. But complex life with nervous systems and consciousness? Thermodynamics may provide the direction and constraints, but evolution explores the space of possibilities unpredictably.
Open Questions
1. How tight is the link between variational and thermodynamic free energy in living systems?
2. What are the minimal organizational requirements for consciousness — not just cognition?
3. Is goal-directed behavior a universal property of dissipative living systems, scaling continuously from metabolism to consciousness?
4. Are the decisive factors in machine consciousness architectural, embodied, dynamic — or something else entirely?
Part Ten: Implications and Meaning
So what does all this mean? What follows from understanding consciousness and learning as thermodynamic phenomena?
First, it suggests a fundamental continuity in nature. There's no sharp boundary between living and non-living, between mind and matter, between consciousness and the rest of the universe. These are different levels of thermodynamic organization, continuous with each other. Life isn't a mysterious exception to physical law — it's a spectacular expression of physical law. Consciousness isn't separate from nature — it's nature becoming aware of itself.
Second, it suggests that learning and knowledge have thermodynamic value in a literal sense. Your brain's models, your accumulated knowledge, represent free energy that was invested in building them. Every fact you know, every skill you have, every insight you've gained, cost energy to acquire. Knowledge is thermodynamically expensive. But it's worth the cost when it reduces future free energy expenditure by improving predictions — the return on investment is positive.
Third, it suggests ethical implications. If consciousness is fundamentally about the capacity to suffer and flourish in relation to free energy, then welfare — human and animal — has thermodynamic dimensions. Suffering may be literally the experience of increasing free energy, of predictions failing, of losing control. This could ground ethics in biology and physics rather than arbitrary human preferences. The possibility of a naturalistic foundation for ethics is profound.
Fourth, it changes how we think about AI and machine consciousness. If consciousness requires specific thermodynamic organization — continuous energy flow, hierarchical self-modeling, flexible resource allocation — then not all computational systems will be conscious, regardless of their functional capabilities. A system that merely simulates conscious processes without embodying the right thermodynamic dynamics wouldn't be conscious, any more than a computer simulation of a hurricane is wet. The right question isn't "is it intelligent?" but "does it implement the right thermodynamic organization?"
Finally, it gives us a new perspective on human nature and our place in the cosmos. We are not separate from the physical universe, observing it from outside. We are the universe in a particular state of self-organization — a state where energy dissipation has become so sophisticated that it generates experience, agency, understanding. When you learn something, you're not defying entropy — you're expressing it at a higher level of organization. When you're conscious, you're not transcending physics — you're physics doing something extraordinarily subtle and complex.
The mystery of consciousness isn't that it violates natural law. The mystery is that natural law, operating over billions of years on a planet bathed in sunlight, could generate something capable of contemplating natural law. The mystery is that energy flow, entropy increase, and dissipative structuring can produce beings who wonder about energy, entropy, and structure.
Conclusion: Energy, Awareness, and the Depths We Contain
You are twenty watts of consciousness, built from four billion years of dissipative structuring.
Let me end where we began: with energy, that fundamental, ubiquitous, mysterious quantity.
We still don't know what energy "really is" in some ultimate metaphysical sense. But we've traced its flow through the universe, from the nuclear fusion of stars through the empty void of space, absorbed by planets, driving chemical reactions, organizing molecules, powering cells, firing neurons, generating consciousness, enabling thought.
And that flow, that cascade from order to disorder, from low entropy to high entropy, is not mere dissipation in the colloquial sense of waste. It's creative dissipation. It builds as it destroys. It organizes as it spreads.
You are a dissipative structure, like a hurricane or a candle flame, maintained by energy flowing through you. But unlike hurricanes or flames — and unlike even the remarkable goal-directed slime molds and plants that found their own ways to navigate this same thermodynamic imperative — you've developed the capacity to model the flow, to predict it, to reflect on it. You've become a knot in the energy flow that knows it's a knot in the energy flow.
Your learning is the process by which this knot refines its form to better channel the flow. Each thing you learn, each skill you master, each insight you gain, is a thermodynamic optimization — a restructuring of neural organization to minimize future free energy through better prediction.
Your consciousness is what it feels like to be this kind of dissipative structure, maintaining itself far from equilibrium, constantly managing uncertainty, allocating limited resources to process what matters. The qualia of consciousness — the redness of red, the painfulness of pain, the joyfulness of joy — are how thermodynamic imperatives present themselves to sophisticated predictive systems like us.
We are thermodynamics made conscious. We are energy flow that has learned to model energy flow. We are the universe's way of knowing itself.
And perhaps most remarkably, this understanding doesn't diminish the preciousness of consciousness or the value of learning. If anything, it deepens them. To be conscious, to learn, to grow in understanding — these aren't trivial additions to a meaningless physical universe. They're expressions of the universe's deepest tendencies, the culmination of billions of years of thermodynamic creativity.
The thread runs unbroken from the fundamental equations of statistical mechanics through dissipative structures, the origin of life, the evolution of nervous systems, to you, reading these words, understanding this story. It's all one process — energy flowing, entropy increasing, and in the flow, briefly, beautifully, consciousness arising.
You are twenty watts of consciousness, built from four billion years of dissipative structuring, carrying forward the thermodynamic imperative that created you, learning to predict and navigate a complex world, aware of your own awareness. You are not separate from energy — you are what energy becomes when given enough time and the right conditions.
And you, experiencing this moment of understanding, are proof that energy and entropy can do something absolutely extraordinary: they can wonder about themselves.
For questions, feedback, or further exploration:
Email: mcfadden@snet.net | Web: McFaddenCAE.com
References and Further Reading
The following references support the scientific claims made in this essay and provide pathways for deeper exploration. Entries are grouped by the order in which they become relevant to the narrative.
Thermodynamics and Dissipative Structures
1. Prigogine, I., & Stengers, I. (1984). Order Out of Chaos: Man's New Dialogue with Nature. Bantam Books. — The foundational text on dissipative structures and non-equilibrium thermodynamics.
2. Prigogine, I. (1977). Nobel Lecture: Time, Structure and Fluctuations. Nobel Prize in Chemistry. — Prigogine's own summary of dissipative structure theory.
Dissipation-Driven Adaptation
3. England, J. L. (2015). Dissipative adaptation in driven self-assembly. Nature Nanotechnology, 10, 919–923. https://doi.org/10.1038/nnano.2015.250 — The mathematical basis for dissipation-driven adaptation and proto-learning in matter.
Thermodynamic Origin of Life
4. Michaelian, K. (2011). Thermodynamic dissipation theory for the origin of life. Earth System Dynamics, 2, 37–51. — Core paper for the UV-C dissipation theory of life's origins.
5. Michaelian, K., & Simeonov, A. (2015). Fundamental molecules of life are pigments which arose and co-evolved as a response to the thermodynamic imperative of dissipating the prevailing solar spectrum. Biogeosciences, 12, 4913–4937.
The Free Energy Principle
6. Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127–138. https://doi.org/10.1038/nrn2787 — The foundational paper on the Free Energy Principle.
7. Friston, K. J., Wiese, W., & Hobson, J. A. (2021). Sentience and the origins of consciousness: From cartesian duality to Markovian monism. Entropy, 23(5), 552. — Extension of FEP to consciousness.
8. Solms, M. (2021). The Hidden Spring: A Journey to the Source of Consciousness. Norton. — Solms on the affective, thermodynamic basis of consciousness.
9. Rao, R. P. N., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2, 79–87. — Foundational paper on predictive processing in visual cortex.
10. Clark, A. (2016). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press. — Comprehensive treatment of predictive processing and the active inference framework.
Basal Cognition and Non-Neural Intelligence
11. Nakagaki, T., Yamada, H., & Tóth, Á. (2000). Maze-solving by an amoeboid organism. Nature, 407, 470. https://doi.org/10.1038/35035159 — The original slime mold maze experiment.
12. Tero, A., Takagi, S., Saigusa, T., et al. (2010). Rules for biologically inspired adaptive network design. Science, 327(5964), 439–442. https://doi.org/10.1126/science.1177894 — The Tokyo rail network experiment with Physarum.
13. Saigusa, T., Tero, A., Nakagaki, T., & Kuramoto, Y. (2008). Amoebae anticipate periodic events. Physical Review Letters, 100, 018101. — Slime mold anticipatory behavior without neurons.
14. Levin, M. (2023). Bioelectric networks: The cognitive glue enabling evolutionary scaling from physiology to mind. Animal Cognition. https://doi.org/10.1007/s10071-023-01780-3 — Levin's comprehensive framework for basal cognition and bioelectric intelligence.
15. Levin, M. (2019). The computational boundary of a "self": Developmental bioelectricity drives multicellularity and scale-free cognition. Frontiers in Psychology, 10, 2688. — Foundational paper on the cognitive boundary of biological systems.
16. McMillen, P., & Levin, M. (2024). Collective intelligence: A unifying concept for integrating biology across scales and substrates. Communications Biology, 7, 378. — Recent synthesis of collective intelligence across biological scales.
17. Gagliano, M., Renton, M., Depczynski, M., & Mancuso, S. (2014). Experience teaches plants to learn faster and forget slower in environments where it matters. Oecologia, 175, 63–72. — Plant habituation and learning.
18. Hedrich, R., & Neher, E. (2018). Venus flytrap: How an excitable, carnivorous plant works. Trends in Plant Science, 23(3), 220–234. — The Venus flytrap action potential counting mechanism.
19. Miller, M. B., & Bassler, B. L. (2001). Quorum sensing in bacteria. Annual Review of Microbiology, 55, 165–199. — Foundational review of bacterial quorum sensing as distributed computation.
Consciousness and Affect
20. Damasio, A. (1994). Descartes' Error: Emotion, Reason and the Human Brain. Putnam. — Damasio on emotion as foundational to cognition.
21. Gell-Mann, M., & Lloyd, S. (1996). Information measures, effective complexity, and total information. Complexity, 2(1), 44–52. — Effective complexity and related measures; the concept of thermodynamic depth itself originates with Lloyd, S., & Pagels, H. (1988), Complexity as thermodynamic depth, Annals of Physics, 188, 186–213.
22. Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Integrated information theory: From consciousness to its physical substrate. Nature Reviews Neuroscience, 17, 450–461. — Consciousness and information integration (context for the entropy/consciousness discussion).
Neural Energy, Learning, and Memory
23. Attwell, D., & Laughlin, S. B. (2001). An energy budget for signaling in the grey matter of the brain. Journal of Cerebral Blood Flow & Metabolism, 21, 1133–1145. — The foundational neural energy budget; the thirty-five-fold communication-versus-computation figure cited in Part Four comes from later work by Levy and Calvert (2021, PNAS).
24. Jamadar, S. D. (2020). Metabolic and hemodynamic resting-state connectivity of the human brain: A high-temporal resolution simultaneous BOLD-fMRI and CMRO2 study. NeuroImage, 216, 116791. — Metabolic costs of cognition at rest.
25. Bhatt, D. L., & Bhatt, D. (2011). Costly synapses. The energy cost of synaptic plasticity: Implications for metabolic demands of memory. Neuroscience & Biobehavioral Reviews, 35, 2030–2037.
26. Mery, F., & Kawecki, T. J. (2005). A cost of long-term memory in Drosophila. Science, 308(5725), 1148. https://doi.org/10.1126/science.1111331 — The fruit fly memory-energy trade-off study.
27. Bhatt, D. H., Zhang, S., & Gan, W.-B. (2009). Dendritic spine dynamics. Annual Review of Physiology, 71, 261–282. — Structural spine dynamics behind lasting synaptic change; for the "synaptic caching" model of cheap transient learning followed by expensive consolidation, see Li, H. L., & van Rossum, M. C. W. (2020), Energy efficient synaptic plasticity, eLife, 9.
28. Volterra, A., & Meldolesi, J. (2005). Astrocytes, from brain glue to communication elements: The revolution continues. Nature Reviews Neuroscience, 6, 626–640. — Astrocytes as active participants in neural computation and learning.
29. Bazargani, N., & Attwell, D. (2016). Astrocyte calcium signaling: The third wave. Nature Neuroscience, 19, 182–189. — Astrocyte involvement in memory and learning consolidation.
Building Intuition Before Equations
The Holistic Analyst | McFaddenCAE.com